
    Change of perspectives (News Performance)

Field of application/theoretical foundation: Analyses of change of perspectives are theoretically linked to the news performance and democratic function of the media (McQuail, 1992). The construct is related to viewpoint diversity and the normative expectation that different views should be presented in news coverage (Napoli & Gillis, 2008). In addition, more recent analyses focus on the different perspectives articulated in user comments, often linked to theories of deliberation (Baden & Springer, 2015).

References/combination with other methods of data collection: Change of perspectives in news coverage is measured i) directly (e.g., by asking whether a change of perspective is presented in an article) or ii) indirectly, by coding different perspectives (e.g., statements including different viewpoints). Indirect measures can also be used in automated approaches (Möller et al., 2018).

Example studies: Baden & Springer (2014); Humprecht (2016)

Table 1. Study summaries

Baden & Springer (2014)
Sample:
- Content type: online news coverage on selected key events and user comments
- Outlet/country: 5 German newspapers (Süddeutsche Zeitung, Die Welt, TZ, Die Zeit, Spiegel)
- Sampling period: February–July 2012
- Sample size: 42 news articles, 384 user comments
Unit of analysis:
- News article: max. 2 main interpretative frames (the text's 'central organizing idea')
- User comments: main frame
Values:
- Object of problem definition
- Logic of evaluation: inspired (good is what is true, divine & amazing); popular (good is what the people want); moral (good is what is social, fair & moral); civic (good is what is accepted & conventional); economic (good is what is profitable & creates value); functional (good is what works); ecological (good is what is sustainable & natural)
- Logic of (inter)action: believing (interactions between the mind & the world); desire (interactions between the mind & objects); ought (interactions between the mind & people); negotiation (interactions between people & the social world); exchange (interactions between people & objects); technology (interactions between objects & the world); life (interactions between people & the natural world)
Reliability:
- News coverage: coded consensually by the authors
- User comments: main frame: Holsti = 0.78; object of problem definition: Holsti = 0.60; logic of action: Holsti = 0.56; logic of evaluation: Holsti = 1

Humprecht (2016)
Sample:
- Content type: political routine-period news
- Outlet/country: 48 online news outlets from six countries (CH, DE, FR, IT, UK, US)
- Sampling period: June–July 2012
- Sample size: N = 1,660
Unit of analysis: political news items (reference to a political actor, e.g. politician, party, or institution, in the headline, sub-headline, first paragraph, or an accompanying visual); news items are all journalistic articles mentioned on the front page ('first layer' of the website) that are linked to the actual story (on the second layer of the website)
Values:
- Only one perspective (because the underlying topic is uncontroversial)
- One perspective of a debated/controversial issue (no opposing voice)
- Different perspectives mentioned (different sides, voices, camps, or perspectives mentioned but not elaborated)
- Co-presence of speakers with opposing views (expressed in separate utterances) in the same article
- Story shows a clear attempt at giving a balanced, fair account of a debated/controversial issue by including diverse viewpoints and statements
Reliability: Cohen's kappa = 0.64

References
Baden, C., & Springer, N. (2014). Com(ple)menting the news on the financial crisis: The contribution of news users' commentary to the diversity of viewpoints in the public debate. European Journal of Communication. https://doi.org/10.1177/0267323114538724
Baden, C., & Springer, N. (2015). Conceptualizing viewpoint diversity in news discourse. Journalism, 1–19. https://doi.org/10.1177/1464884915605028
Humprecht, E. (2016). Shaping Online News Performance. Palgrave Macmillan UK. https://doi.org/10.1007/978-1-137-56668-3
McQuail, D. (1992). Media Performance: Mass Communication and the Public Interest. Sage Publications.
Möller, J., Trilling, D., Helberger, N., & van Es, B. (2018). Do not blame it on the algorithm: An empirical assessment of multiple recommender systems and their impact on content diversity. Information, Communication & Society, 21(7), 959–977. https://doi.org/10.1080/1369118X.2018.1444076
Napoli, P., & Gillis, N. (2008). Media ownership and the diversity index: Outlining a social science research agenda (McGannon Center Working Paper Series No. 5).
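The reliability coefficients reported for these studies (Holsti's coefficient, Cohen's kappa) can be computed directly from two coders' labels. A minimal sketch in Python; the example labels in the comments are hypothetical, not taken from the cited studies:

```python
from collections import Counter

def holsti(coder_a, coder_b):
    """Holsti's coefficient: share of units on which two coders agree."""
    assert len(coder_a) == len(coder_b)
    agree = sum(1 for a, b in zip(coder_a, coder_b) if a == b)
    return agree / len(coder_a)

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(coder_a)
    p_o = holsti(coder_a, coder_b)          # observed agreement
    freq_a = Counter(coder_a)
    freq_b = Counter(coder_b)
    # expected agreement if both coders assigned categories independently
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical labels: two coders classify four units into categories x/y
# holsti(...) -> 0.75, cohens_kappa(...) -> 0.5
```

Holsti ignores chance agreement, which is why studies with few categories (where chance agreement is high) usually also report kappa or Krippendorff's alpha.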

    Cause/antecedents/history (News Performance)

Field of application/theoretical foundation: Analyses using the constructs cause/antecedents/history in news content are theoretically related to news performance and the democratic function of the media (McQuail, 1992). This construct is linked to professional standards and the normative assumption that the media should provide the audience with background information on current events and issues (Westerstahl, 1983). For example, news can be used to explain how a particular problem occurred, what happened beforehand, and what the concrete reasons for the current situation are.

References/combination with other methods of data collection: The analysis of reporting on the causes, background, and history of events is complex and requires an understanding of the context and of the relationships established by the journalist. As a result of this complexity, no automated measurement procedures have yet been developed.

Example study: Humprecht (2016)

Table 1. Study summary

Humprecht (2016)
Sample:
- Content type: political routine-period news
- Outlet/country: 48 online news outlets from six countries (CH, DE, FR, IT, UK, US)
- Sampling period: June–July 2012
- Sample size: N = 1,660
Unit of analysis: political news items (reference to a political actor, e.g. politician, party, or institution, in the headline, sub-headline, first paragraph, or an accompanying visual); news items are all journalistic articles mentioned on the front page ('first layer' of the website) that are linked to the actual story (on the second layer of the website)
Values:
- Not mentioned
- Rudimentary mention (e.g. reference to previous events without explanation)
- Mentioned in detail (e.g. explanation of historical events, causes, etc.)
Reliability: Cohen's kappa average ≥ 0.69

References
Humprecht, E. (2016). Shaping Online News Performance. Palgrave Macmillan UK. https://doi.org/10.1007/978-1-137-56668-3
McQuail, D. (1992). Media Performance: Mass Communication and the Public Interest. Sage Publications.
Westerstahl, J. (1983). Objective news reporting: General premises. Communication Research, 10, 403–424.

    Actor diversity (News Performance)

Field of application/theoretical foundation: Analyses of actor diversity are theoretically linked to news performance and the democratic media function of integration (Imhof, 2010). This construct is related to the normative assumption that news content should represent society as a whole and thus cover a large variety of societal groups (Boydstun et al., 2014). More recent studies also focus on the influence of algorithms on news diversity (Möller et al., 2018). Analyses are often carried out in three steps. First, all actors are (inductively or deductively) identified. Second, actors are coded according to predefined lists. Third, the level of diversity is determined using diversity indices (van Cuilenburg, 2007). Diversity indices are calculated at the article level (internal diversity) or at the organizational level (external diversity) to compare diversity between news articles of a single outlet or between different news outlets.

References/combination with other methods of data collection: Studies on actor diversity use both manual and automated content analysis to investigate the occurrence of actors in texts. They use inductive or deductive approaches, or a combination of both, to identify actor categories and extend predefined lists of actors (van Hoof et al., 2014).

Example studies: Masini et al. (2018); Humprecht & Esser (2018)

Table 1. Summary of studies on actor diversity

Masini et al. (2018)
Sample:
- Content type: news about immigration
- Outlet/country: 2 news outlets in each of four countries (BE, DE, IT, UK)
- Sampling period: January 2013 to April 2014
- Sample size: N = 2,490
Unit of analysis: news article
- No. of actors coded: max. 10 quoted or paraphrased actors per article
- Level of analysis: article and news outlet level
- Diversity measure: Simpson's diversity index
Values: national politics, international politics, public opinion and ordinary people, immigrants, civil society, public agencies/organizations, judiciary/police/military, religion, business/corporate/finance, journalists/media celebrities, traffickers/smugglers
Reliability: Krippendorff's alpha average ≥ 0.78

Humprecht & Esser (2018)
Sample:
- Content type: political routine-period news
- Outlet/country: 48 online news outlets from six countries (CH, DE, FR, IT, UK, US)
- Sampling period: June–July 2012
- Sample size: N = 1,660
Unit of analysis: political news items (reference to a political actor, e.g. politician, party, or institution, in the headline, sub-headline, first paragraph, or an accompanying visual); news items are all journalistic articles mentioned on the front page ('first layer' of the website) that are linked to the actual story (on the second layer of the website)
- No. of actors coded: max. 5 main actors (mentioned twice) per news item
- Level of analysis: news outlet level
- Diversity measure: relative entropy
Values: executive (head of state and national government), legislative (national parliament and national parties), judicial (national courts and judges), national administration (prosecution, regional government authority, and police or army), foreign politicians (foreign heads of state and other foreign politicians), and international organizations (supranational and international organizations)
Reliability: Cohen's kappa average ≥ 0.76

References
Boydstun, A. E., Bevan, S., & Thomas, H. F. (2014). The importance of attention diversity and how to measure it. Policy Studies Journal, 42(2), 173–196. https://doi.org/10.1111/psj.12055
Humprecht, E., & Esser, F. (2018). Diversity in online news: On the importance of ownership types and media system types. Journalism Studies, 19(12), 1825–1847. https://doi.org/10.1080/1461670X.2017.1308229
Imhof, K. (2010). Die Qualität der Medien in der Demokratie. In fög – Forschungsbereich Öffentlichkeit und Gesellschaft (Ed.), Jahrbuch 2010: Qualität der Medien (pp. 11–20). Schwabe.
Masini, A., Van Aelst, P., Zerback, T., Reinemann, C., Mancini, P., Mazzoni, M., Damiani, M., & Coen, S. (2018). Measuring and explaining the diversity of voices and viewpoints in the news: A comparative study on the determinants of content diversity of immigration news. Journalism Studies, 19(15), 2324–2343. https://doi.org/10.1080/1461670X.2017.1343650
Möller, J., Trilling, D., Helberger, N., & van Es, B. (2018). Do not blame it on the algorithm: An empirical assessment of multiple recommender systems and their impact on content diversity. Information, Communication & Society, 21(7), 959–977. https://doi.org/10.1080/1369118X.2018.1444076
van Cuilenburg, J. (2007). Media diversity, competition and concentration: Concepts and theories. In E. de Bens (Ed.), Media Between Culture and Commerce (pp. 25–54). Intellect.
van Hoof, A., Jacobi, C., Ruigrok, N., & van Atteveldt, W. (2014). Diverse politics, diverse news coverage? A longitudinal study of diversity in Dutch political news during two decades of election campaigns. European Journal of Communication, 29(6), 668–686. https://doi.org/10.1177/026732311454571
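The two diversity measures used in these studies, Simpson's diversity index and relative entropy, reduce a distribution of actor categories to a single score. A minimal sketch in Python; the category counts are hypothetical, not data from the cited studies:

```python
import math

def simpson_diversity(counts):
    """Simpson's diversity index: 1 minus the sum of squared proportions.
    0 means one actor category dominates completely; higher values mean
    a more even spread across categories."""
    n = sum(counts)
    return 1 - sum((c / n) ** 2 for c in counts)

def relative_entropy(counts):
    """Shannon entropy normalized by its maximum (log of the number of
    categories), so the score runs from 0 (no diversity) to 1 (perfectly
    even distribution across all categories)."""
    n = sum(counts)
    k = len(counts)
    h = -sum((c / n) * math.log(c / n) for c in counts if c > 0)
    return h / math.log(k)

# Hypothetical article: 5 mentions each of two actor categories
# simpson_diversity([5, 5]) -> 0.5, relative_entropy([5, 5]) -> 1.0
```

Computed over the actor counts of a single article, these yield internal diversity; aggregated over all items of an outlet, they yield the external (outlet-level) diversity compared in the studies above.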

    Journalism In 140 Characters. European Journalism Observatory


    Publishers/sources (Disinformation)

Recent research has mainly used two approaches to identify publishers or sources of disinformation. First, alternative media are identified as potential publishers of disinformation. Second, potential publishers of disinformation are identified via fact-checking websites. Samples created using these approaches can partly overlap; however, the two approaches differ in terms of the validity and comprehensiveness of the identified population. Sampling of alternative media outlets is theory-driven and allows for cross-national comparison, but researchers face the challenge of identifying the misinforming content published by these outlets. In contrast, fact-checked content facilitates the identification of a given disinformation population; however, fact-checkers often have a publication bias, focusing on a small range of (elite) actors or sources (e.g. individual blogs, hyperpartisan news outlets, or politicians). In both approaches it is important to describe, compare and, if possible, assign the outlets to existing categories in order to enable temporal and spatial comparison.

Approaches to identify sources/publishers: Besides the operationalization of specific variables analyzed in the field of disinformation, the sampling procedure is a crucial element in operationalizing disinformation itself. Following the approach of detecting disinformation through its potential sources or publishers (Li, 2020), research analyzes alternative media (Bachl, 2018; Boberg, Quandt, Schatto-Eckrodt, & Frischlich, 2020; Heft et al., 2020) or identifies a varied range of actors or domains via fact-checking sites (Allcott & Gentzkow, 2017; Grinberg et al., 2019; Guess, Nyhan, & Reifler, 2018). These two approaches are explained in the following.

Alternative media as sources/publishers
The following procedure summarizes the approaches used in current research to identify relevant alternative media outlets (following Bachl, 2018; Boberg et al., 2020; Heft et al., 2020). Snowball sampling to define the universe of alternative media outlets may consist of the following steps:
1. Start from a sample of outlets identified in previous research
2. Consult search engines and news articles
3. Departing from a potential prototype, consult websites that provide digital metrics (Alexa.com or Similarweb.com). For example, Similarweb.com shows three relevant lists per outlet: "Top Referring Sites" (which websites send traffic to this site), "Also visited websites" (overlap with users of other websites), and "Competitors & Similar Sites" (similarity as defined by the company)

Definition of alternative media outlets:
- Journalistic outlets (excluding, for example, blogs and forums) with current, non-fictional and regularly published content
- Self-description of the outlet in an "about us" section or mission statement that underlines the relational perspective of being an alternative to the mainstream media. Such a description may include keywords such as alternative, independent, unbiased or critical, or statements like "presenting the real/true views/facts" or "covering what the mainstream media hides/leaves out"
- Use of predefined dimensions and categories of alternative media (Frischlich, Klapproth, & Brinkschulte, 2020; Holt, Ustad Figenschou, & Frischlich, 2019)

Sources/publishers via fact-checking sites
Following previous research in the U.S., Guess et al. (2018) identified "fake news domains" (focusing on pro-Trump and pro-Clinton content) that published two or more articles coded as "fake news" by fact-checkers (derived from Allcott & Gentzkow, 2017). Grinberg et al.
(2019) identified three classes of "fake news sources", differentiated by the severity and frequency of false content (see Table 1). These three classes are part of a total of six website labels: the researchers additionally coded sites as reasonable journalism, low-quality journalism, satire, or not applicable. The coders reached a percentage agreement of 60% for the labeling of the six categories and of 80% for the distinction between fake and non-fake categories.

Table 1. Three classes of "fake news sources" by Grinberg et al. (2019)

Black domains
- Specification: based on previous studies; these domains published at least two articles that were declared "fake news" by fact-checking sites.
- Identification: based on preexisting lists constructed by fact-checkers, journalists and academics (Allcott & Gentzkow, 2017; Guess et al., 2018).
- Definition: almost exclusively fabricated stories.

Red domains
- Specification: major or frequent falsehoods in line with the site's political agenda; prejudiced (the site presents falsehoods that focus on one group with regard to race/religion/ethnicity/sexual orientation); or major or frequent falsehoods with little regard for the truth, but not necessarily to advance a certain political agenda.
- Identification: flagged by the fact-checker snopes.com as sources of questionable claims; then manually differentiated between red and orange domains.
- Definition: falsehoods that clearly reflect a flawed editorial process.

Orange domains
- Specification: moderate or occasional falsehoods to advance a political agenda; sensationalism (exaggerations to the extent that the article becomes misleading and inaccurate); occasionally prejudiced articles (the site at times presents individual articles that contain falsehoods regarding race/religion/ethnicity/sexual orientation); the site openly states that it may be inaccurate, fake news, or not to be trusted to provide factual news; moderate or frequent falsehoods with little regard for the truth, but not necessarily to advance a certain political agenda; conspiratorial (explanations of events that involve unwarranted suspicion of government cover-ups or supernatural agents).
- Identification: flagged by the fact-checker snopes.com as sources of questionable claims; then manually differentiated between red and orange domains.
- Definition: negligent and deceptive information, but less systemically flawed.

Supplementary materials: https://science.sciencemag.org/content/sci/suppl/2019/01/23/363.6425.374.DC1/aau2706_Grinberg_SM.pdf (S5 and S6)
Coding scheme and source labels: https://zenodo.org/record/2651401#.XxGtJJgzaUl (LazerLab-twitter-fake-news-replication-2c941b8\domains\domain_coding\data)

References
Allcott, H., & Gentzkow, M. (2017). Social media and fake news in the 2016 election. Journal of Economic Perspectives, 31(2), 211–236.
Bachl, M. (2018). (Alternative) media sources in AfD-centered Facebook discussions. Studies in Communication | Media, 7(2), 256–270.
Bakir, V., & McStay, A. (2018). Fake news and the economy of emotions. Digital Journalism, 6(2), 154–175.
Boberg, S., Quandt, T., Schatto-Eckrodt, T., & Frischlich, L. (2020, April 6). Pandemic populism: Facebook pages of alternative news media and the corona crisis – A computational content analysis. Retrieved from http://arxiv.org/pdf/2004.02566v3
Farkas, J., Schou, J., & Neumayer, C. (2018). Cloaked Facebook pages: Exploring fake Islamist propaganda in social media. New Media & Society, 20(5), 1850–1867.
Frischlich, L., Klapproth, J., & Brinkschulte, F. (2020). Between mainstream and alternative – Co-orientation in right-wing populist alternative news media. In C. Grimme, M. Preuss, F. W. Takes, & A. Waldherr (Eds.), Lecture Notes in Computer Science. Disinformation in open online media (Vol. 12021, pp. 150–167). Cham: Springer International Publishing.
Grinberg, N., Joseph, K., Friedland, L., Swire-Thompson, B., & Lazer, D. (2019).
Fake news on Twitter during the 2016 U.S. presidential election. Science, 363(6425), 374–378.
Guess, A., Nagler, J., & Tucker, J. (2019). Less than you think: Prevalence and predictors of fake news dissemination on Facebook. Science Advances, 5(1). https://doi.org/10.1126/sciadv.aau4586
Guess, A., Nyhan, B., & Reifler, J. (2018). Selective exposure to misinformation: Evidence from the consumption of fake news during the 2016 US presidential campaign. European Research Council, 9(3), 1–14.
Heft, A., Mayerhöffer, E., Reinhardt, S., & Knüpfer, C. (2020). Beyond Breitbart: Comparing right-wing digital news infrastructures in six Western democracies. Policy & Internet, 12(1), 20–45.
Holt, K., Ustad Figenschou, T., & Frischlich, L. (2019). Key dimensions of alternative news media. Digital Journalism, 7(7), 860–869.
Nelson, J. L., & Taneja, H. (2018). The small, disloyal fake news audience: The role of audience availability in fake news consumption. New Media & Society, 20(10), 3720–3737.
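The snowball procedure for expanding a seed list of alternative media outlets can be sketched as a breadth-limited traversal over "related site" links. A minimal sketch in Python; `similar_sites` stands in for whatever lookup the researcher supplies (e.g. a scraped "Competitors & Similar Sites" list) and is an assumption, not an established API:

```python
from collections import deque

def snowball_domains(seeds, similar_sites, max_rounds=2):
    """Expand a seed list of outlet domains via repeated 'similar sites'
    lookups, up to max_rounds hops from the seeds.

    similar_sites: callable mapping a domain to an iterable of related
    domains (hypothetical; e.g. backed by a digital-metrics service).
    Returns the set of all domains reached, including the seeds.
    """
    known = set(seeds)
    frontier = deque((d, 0) for d in seeds)
    while frontier:
        domain, depth = frontier.popleft()
        if depth >= max_rounds:
            continue  # stop expanding beyond the hop limit
        for neighbor in similar_sites(domain):
            if neighbor not in known:
                known.add(neighbor)
                frontier.append((neighbor, depth + 1))
    return known
```

Each candidate domain returned by the traversal would then still have to pass the definitional checks above (journalistic output, alternative self-description, predefined dimensions) before entering the sample.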

    Topics (Disinformation)

The topic variable is used in research on disinformation to analyze thematic differences in the content of false news, rumors, conspiracies, etc. These topics frequently follow national news agendas, i.e. producers of disinformation address current national or world events (e.g. elections, immigration) (Humprecht, 2019).

Field of application/theoretical foundation: Topics are a central yet under-researched aspect of research on online disinformation (Freelon & Wells, 2020). The research interest is to find out which topics are taken up and spread by disinformation producers. The focus of this research is both on specific key topics for which sub-themes are identified (e.g. elections, climate change, Covid-19) and, more generally, on the question of which misleading content is disseminated (mostly on social media). Methodologically, the identification of topics is often a first step, followed by further analysis of the content (Ferrara, 2017). The analysis of topics is thus linked to the detection of disinformation, which represents a methodological challenge. Topics can be identified inductively or deductively. Inductive analyses often start from a data corpus, for example social media data, and identify topics using techniques such as topic modelling (e.g. Boberg et al., 2020). Deductive analyses frequently use topic lists to classify content; topic lists are initially created based on the literature on the respective topic or with the help of databases, e.g. by fact-checkers.

References/combination with other methods of data collection: Studies on topics of disinformation use manual and automated content analysis, or combinations of both, to investigate the occurrence of different topics in texts (Boberg et al., 2020; Bradshaw, Howard, Kollanyi, & Neudert, 2020).
Inductive and deductive approaches have been combined with qualitative text analyses to identify topic categories which are subsequently coded (Humprecht, 2019; Marchal, Kollanyi, Neudert, & Howard, 2019).

Example studies: Ferrara (2017); Humprecht (2019); Marchal et al. (2019)

Table 1. Summary of selected studies

Ferrara (2017)
Sample:
- Content type: tweets
- Sampling period: April 27, 2017 to May 7, 2017
- Sample size: 16.65 million tweets
- Sampling: list of 23 keywords and top 20 hashtags
Values:
- Keywords: France2017, Marine2017, AuNomDuPeuple, FrenchElection, FrenchElections, Macron, LePen, MarineLePen, FrenchPresidentialElection, JeChoisisMarine, JeVoteMarine, JeVoteMacron, JeVote, Presidentielle2017, ElectionFracaise, JamaisMacron, Macron2017, EnMarche, MacronPresident
- Hashtags: #Macron, #Presidentielle2017, #fn, #JeVote, #LePen, #France, #2017LeDebat, #MacronLeaks, #Marine2017, #debat2017, #2017LeDébat, #MacronGate, #MarineLePen, #Whirlpool, #EnMarche, #JeVoteMacron, #MacronPresident, #JamaisMacron, #FrenchElection
Reliability: –

Humprecht (2019)
Sample:
- Content type: fact checks
- Outlet/country: 2 fact-checkers per country (AT, DE, UK, US)
- Sampling period: June 1, 2016 to September 30, 2017
- Sample size: N = 651
- Unit of analysis: story/fact check
- No. of topics coded: main topic per fact check
- Level of analysis: fact checks and fact-checker
Values: conspiracy theory, education, election campaign, environment, government/public administration (at the time the story was published), health, immigration/integration, justice/crime, labor/employment, macroeconomics/economic regulation, media/journalism, science/technology, war/terror, others
Reliability: Krippendorff's alpha = 0.71

Marchal et al. (2019)
Sample:
- Content type: tweets related to the 2019 European elections
- Sampling: hashtags in English, Catalan, French, German, Italian, Polish, Spanish, Swedish
- Sampling criteria: tweets that (1) contained at least one of the relevant hashtags; (2) contained the hashtag in the URL shared or the title of its webpage; (3) were a retweet of a message that contained a relevant hashtag or mention in the original message; or (4) were a quoted tweet referring to a tweet with a relevant hashtag or mention
- Sampling period: 5 April to 20 April 2019
- Sample size: 584,062 tweets from 187,743 unique users
Values:
- Religion Islam (Muslim, Islam, Hijab, Halal, Muslima, Minaret)
- Religion Christianity (Christianity, Church, Priest)
- Immigration (Asylum Seeker, Refugee, Migrants, Child Migrant, Dual Citizenship, Social Integration)
- Terrorism (ISIS, Djihad, Terrorism, Terrorist Attack)
- Political Figures/Parties (Vladimir Putin, Enrico Mezzetti, Emmanuel Macron, ANPI, Arnold van Doorn, Islamic Party for Unity, Nordic Resistance Movement)
- Celebrities (Lara Trump, Alba Parietti)
- Crime (Vandalism, Rape, Sexual Assault, Fraud, Murder, Honour Killing)
- Notre-Dame Fire (Notre-Dame Fire, Reconstruction)
- Political Ideology (Anti-Fascism, Fascism, Nationalism)
- Social Issues (Abortion, Bullying, Birth Rate)
Reliability: –

References
Boberg, S., Quandt, T., Schatto-Eckrodt, T., & Frischlich, L. (2020). Pandemic populism: Facebook pages of alternative news media and the corona crisis – A computational content analysis. Retrieved from http://arxiv.org/abs/2004.02566
Bradshaw, S., Howard, P. N., Kollanyi, B., & Neudert, L. M. (2020). Sourcing and automation of political news and information over social media in the United States, 2016-2018. Political Communication, 37(2), 173–193. https://doi.org/10.1080/10584609.2019.1663322
Ferrara, E. (2017). Disinformation and social bot operations in the run up to the 2017 French presidential election. First Monday, 22(8). https://doi.org/10.5210/FM.V22I8.8005
Freelon, D., & Wells, C. (2020). Disinformation as political communication. Political Communication, 37(2), 145–156. https://doi.org/10.1080/10584609.2020.1723755
Humprecht, E. (2019). Where 'fake news' flourishes: A comparison across four Western democracies. Information, Communication & Society, 22(13), 1973–1988. https://doi.org/10.1080/1369118X.2018.1474241
Marchal, N., Kollanyi, B., Neudert, L., & Howard, P. N. (2019). Junk news during the EU parliamentary elections: Lessons from a seven-language study of Twitter and Facebook. Oxford, UK. Retrieved from https://comprop.oii.ox.ac.uk/wp-content/uploads/sites/93/2019/05/EU-Data-Memo.pd
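The deductive topic-list approach described above amounts to keyword matching against predefined lists. A minimal sketch in Python; the topic lists below are illustrative, not taken from any cited codebook:

```python
# Hypothetical topic lists; in practice these would be derived from the
# literature or from fact-checker databases, as described above.
TOPIC_KEYWORDS = {
    "immigration": {"asylum", "refugee", "migrant", "border"},
    "health": {"vaccine", "pandemic", "virus", "hospital"},
    "election": {"ballot", "candidate", "vote", "campaign"},
}

def code_topic(text):
    """Assign the topic whose keyword list the text matches most often;
    fall back to 'other' when no keyword matches."""
    tokens = set(text.lower().split())
    scores = {topic: len(tokens & kws) for topic, kws in TOPIC_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "other"
```

An inductive analysis would instead fit a topic model (e.g. LDA) to the corpus and label the resulting word clusters manually; the deductive sketch above trades that flexibility for transparent, reproducible category definitions.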

    Perceived Exposure to Misinformation and Trust in Institutions in Four Countries Before and During a Pandemic

Misinformation could undermine trust in institutions during a critical period when people require updated information about a pandemic and credible information to make informed voting decisions. This article uses survey data collected in 2019 (n = 6,300) and 2021 (n = 6,000) in the United States, the United Kingdom, France, and Canada to examine the relationship between perceived exposure to misinformation and trust in national news media and the national/federal government. We do not find that perceived exposure to misinformation undermines trust. We test whether these relationships differ for those with left-wing versus right-wing views, by country, period, or electoral context.

    Types (Disinformation)

Disinformation can appear in various forms. First, different formats can be manipulated, such as texts, images, and videos. Second, the amount and degree of falseness can vary, from completely fabricated content, to decontextualized information, to satire that intentionally misleads recipients. The forms and formats of disinformation therefore vary and cannot always be captured by the supposedly clear-cut categories of "true" and "false".

Field of application/theoretical foundation: Studies on types of disinformation are conducted in various fields, e.g. political communication, journalism studies, and media effects research. Among other things, these studies identify the most common types of mis- or disinformation during certain events (Brennen, Simon, Howard, & Nielsen, 2020), analyze and categorize the behavior of different types of Twitter accounts (Linvill & Warren, 2020), and investigate the existence of several types of "junk news" in different national media landscapes (Bradshaw, Howard, Kollanyi, & Neudert, 2020; Neudert, Howard, & Kollanyi, 2019).

References/combination with other methods of data collection: Only relatively few studies combine methods. Some studies identify different types of disinformation via qualitative and quantitative content analyses (Bradshaw et al., 2020; Brennen et al., 2020; Linvill & Warren, 2020; Neudert et al., 2019). Others use surveys to analyze respondents' concerns about, and exposure to, different types of mis- and disinformation (Fletcher, 2018).

Example studies: Brennen et al. (2020); Bradshaw et al. (2020); Linvill and Warren (2020)

Information on example studies: Types of disinformation are defined by the presentation and contextualization of content, and sometimes additionally by details about the communicator (e.g. professionalism).
Studies either deductively identify different types of disinformation (Brennen et al., 2020) by applying the theoretical framework by Wardle (2019), or additionally inductively identify and build different categories based on content analyses (Bradshaw et al., 2020; Linvill & Warren, 2020).

Table 1. Types of mis-/disinformation by Brennen et al. (2020)
- Satire or parody
- False connection: headlines, visuals, or captions don't support the content
- Misleading content: misleading use of information to frame an issue or individual; facts/information are misrepresented or skewed
- False context: genuine content is shared with false contextual information, e.g. real images that have been taken out of context
- Imposter content: genuine sources, e.g. news outlets or government agencies, are impersonated
- Fabricated content: content is made up and 100% false; designed to deceive and do harm
- Manipulated content: genuine information or imagery is manipulated to deceive, e.g. deepfakes or other kinds of manipulation of audio and/or visuals

Note. The categories are adapted from the theoretical framework by Wardle (2019). The coding instruction was: "To the best of your ability, what type of misinformation is it? (Select one that fits best.)" (Brennen et al., 2020, p. 12). The coders reached an intercoder reliability of Cohen's kappa = 0.82.

Table 2. Criteria for the "junk news" label by Bradshaw et al. (2020)

Professionalism (refers to the information about authors and the organization)
- "Sources do not employ the standards and best practices of professional journalism, including information about real authors, editors, and owners" (pp. 174-175). "Distinct from other forms of user-generated content and citizen journalism, junk news domains satisfy the professionalism criterion because they purposefully refrain from providing clear information about real authors, editors, publishers, and owners, and they do not publish corrections of debunked information" (p. 176).
- Procedure: systematically checked the about pages of domains (contact information, information about ownership and editors, and other information relating to professional standards); reviewed whether the sources appeared in third-party fact-checking reports; checked whether sources published corrections of fact-checked reporting.
- Examples: zerohedge.com, conservative-fighters.org, deepstatenation.news

Counterfeit (refers to the layout and design of the domain itself)
- "(…) [S]ources mimic established news reporting by using certain fonts, having branding, and employing content strategies. (…) Junk news is stylistically disguised as professional news by the inclusion of references to news agencies and credible sources as well as headlines written in a news tone with date, time, and location stamps. In the most extreme cases, outlets will copy logos and counterfeit entire domains" (p. 176).
- Procedure: systematically reviewed organizational information about the owner and headquarters by checking sources like Wikipedia, the WHOIS database, and third-party fact-checkers (like Politico or MediaBiasFactCheck); consulted country-specific expert knowledge of the media landscape in the US to identify counterfeiting websites.
- Examples: politicoinfo.com, NBC.com.co

Style (refers to the content of the domain as a whole)
- "(…) [S]tyle is concerned with the literary devices and language used throughout news reporting. (…) Designed to systematically manipulate users for political purposes, junk news sources deploy propaganda techniques to persuade users at an emotional, rather than cognitive, level and employ techniques that include using emotionally driven language with emotive expressions and symbolism, ad hominem attacks, misleading headlines, exaggeration, excessive capitalization, unsafe generalizations, logical fallacies, moving images and lots of pictures or mobilizing memes, and innuendo (Bernays, 1928; Jowette & O'Donnell, 2012; Taylor, 2003). (…) Stylistically, problematic sources will employ propaganda and clickbait techniques to varying degrees. As a result, determining style can be highly complex and context dependent" (p. 177).
- Procedure: examined at least five stories on the front page of each news source in depth during the US presidential campaign in 2016 and the SOTU address in 2018; checked the headlines of the stories and the content of the articles for literary and visual propaganda devices; considered a source stylistically problematic if three of the five stories systematically exhibited elements of propaganda.
- Examples: 100percentfedup.com, barenakedislam.com, theconservativetribune.com, dangerandplay.com

Credibility (refers to the content of the domain as a whole): "(
) [S]ources rely on false information or conspiracy theories and do not post corrections” (p. 175). “[They] typically report on unsubstantiated claims and rely on conspiratorial and dubious sources. (
) Junk news sources that satisfy the credibility criterion frequently fail to vet their sources, do not consult multiple sources, and do not fact-check” (p. 178). Procedure: -        Examined at least five front page stories and reviewed the sources that were cited -        Reviewed pages to see if they included known conspiracy theories on issues such as climate change, vaccination, and “Pizzagate” -        Checked third-party fact-checkers for evidence of debunked stories and conspiracy theories Examples: infowars.com, endingthefed.com, thegatewaypundit.com, newspunch.com Bias refers to the content of the domain as a whole “(
) [H]yper-partisan media websites and blogs (
) are highly biased, ideologically skewed, and publish opinion pieces as news. Basing their stories on the same events, these sources manage to convey strikingly different impressions of what actually transpired. It is such systematic differences in the mapping from facts to news reports that we call bias. (
) Bias exists on both sides of the political spectrum. Like determining style, determining bias can be highly complex and context dependent” (pp. 177-178). Procedure: -        Checked third-party sources that systematically evaluate media bias -        If the domain was not evaluated by a third party, the authors examined the ideological leaning of the sources used to support stories appearing on the domain -        Evaluation of the labeling of politicians (are there differences between the left and the right?) -        Identified bias created through the omission of unfavorable facts, or through writing that is falsely presented as being objective Examples on the right: breitbart.com, dailycaller.com, infowars.com, truthfeed.com Examples on the left: occupydemocrats.com, addictinginfo.com, bipartisanreport.com Note. The coders reached an intercoder reliability of a Krippendorff’s kappa of 0.89. The label of “junk news” is defined by fulfilling at least three of the five criteria. It refers to sources that deliberately publish misleading, deceptive, or incorrect information packaged as real news.   Table 3. Identified types of IRA-associated Twitter accounts by Linvill and Warren (2020) Category Specification Right troll “Twitter-handles broadcast nativist and right-leaning populist messages. These handles’ themes were distinct from mainstream Republicanism. (
) They rarely broadcast traditionally important Republican themes, such as taxes, abortion, and regulation, but often sent divisive messages about mainstream and moderate Republicans. (
) The overwhelming majority of handles, however, had limited identifying information, with profile pictures typically of attractive, young women” (p. 5). Hashtags frequently used by these accounts: #MAGA (i.e., “Make America Great Again,”), #tcot (i.e. “Top Conservative on Twitter), #AmericaFirst, and #IslamKills Left troll “These handles sent socially liberal messages, with an overwhelming focus on cultural identity. (
) They discussed gender and sexual identity (e.g., #LGBTQ) and religious identity (e.g., #MuslimBan), but primarily focused on racial identity. Just as the Right Troll handles attacked mainstream Republican politicians, Left Troll handles attacked mainstream Democratic politicians, particularly Hillary Clinton. (
) It is worth noting that this account type also included a substantial portion of messages which had no clear political motivation” (p. 6). Hashtags frequently used by these accounts: #BlackLivesMatter, #PoliceBrutality, and #BlackSkinIsNotACrime Newsfeed “These handles overwhelmingly presented themselves as U.S. local news aggregators and had descriptive names (
). These accounts linked to legitimate regional news sources and tweeted about issues of local interest (
). A small number of these handles, (
) tweeted about global issues, often with a pro-Russia perspective” (p. 6). Hashtags frequently used by these accounts: #news, #sports, and #local Hashtag gamer “These handles are dedicated almost entirely to playing hashtag games, a popular word game played on Twitter. Users add a hashtag to a tweet (e.g., #ThingsILearnedFromCartoons) and then answer the implied question. These handles also posted tweets that seemed organizational regarding these games (
). Like some tweets from Left Trolls, it is possible such tweets were employed as a form of camouflage, as a means of accruing followers, or both. Other tweets, however, often using the same hashtag as mundane tweets, were socially divisive (
)” (p. 7). Hashtags frequently used by these accounts: #ToDoListBeforeChristmas, #ThingsYouCantIgnore, #MustBeBanned, and #2016In4Words Fearmonger “These accounts spread disinformation regarding fabricated crisis events, both in the U.S. and abroad. Such events included non-existent outbreaks of Ebola in Atlanta and Salmonella in New York, an explosion at the Columbian Chemicals plan in Louisiana, a phosphorus leak in Idaho, as well as nuclear plant accidents and war crimes perpetrated in Ukraine. (
) These accounts typically tweeted a great deal of innocent, often frivolous content (i.e. song lyrics or lines of poetry) which were potentially automated. With this content these accounts often added popular hashtags such as #love (
) and #rap (
). These accounts changed behavior sporadically to tweet disinformation, and that output was produced using a different Twitter client than the one used to produce the frivolous content. (
) The Fearmonger category was the only category where we observed some inconsistency in account activity. A small number of handles tweeted briefly in a manner consistent with the Right Troll category but switched to tweeting as a Fearmonger or vice-versa” (p. 7). Hashtags frequently used by these accounts: #Fukushima2015 and #ColumbianChemicals Note. The categories were identified qualitatively analyzing the content produced and were then refined and explored more detailed via a quantitative analysis. The coders reached a Krippendorff’s alpha intercoder-reliability of 0.92.   References Bradshaw, S., Howard, P. N., Kollanyi, B., & Neudert, L.?M. (2020). Sourcing and automation of political news and information over social media in the United States, 2016-2018. Political Communication, 37(2), 173–193. Brennen, J. S., Simon, F. M., Howard, P. N. [P. N.], & Nielsen, R. K. (2020). Types, sources, and claims of covid-19 misinformation. Reuters Institute. Retrieved from http://www.primaonline.it/wp-content/uploads/2020/04/COVID-19_reuters.pdf Fletcher, R. (2018). Misinformation and disinformation unpacked. Reuters Institute. Retrieved from http://www.digitalnewsreport.org/survey/2018/misinformation-and-disinformation-unpacked/ Linvill, D. L., & Warren, P. L. (2020). Troll factories: Manufacturing specialized disinformation on Twitter. Political Communication, 1–21. Neudert, L.?M., Howard, P., & Kollanyi, B. (2019). Sourcing and automation of political news and information during three European elections. Social Media + Society, 5(3). https://doi.org/10.1177/2056305119863147 Wardle, C. (2019). First Draft's essential guide to understanding information disorder. UK: First Draft News. Retrieved from https://firstdraftnews.org/wp-content/uploads/2019/10/Information_Disorder_Digital_AW.pdf?x7670
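The “junk news” label described in Table 2 is an at-least-three-of-five threshold rule. A minimal sketch of that decision rule follows; the function name and the set-based representation of coded criteria are illustrative assumptions, not part of Bradshaw et al.’s (2020) actual coding procedure:

```python
# Criterion names follow Table 2 (Bradshaw et al., 2020).
CRITERIA = ("professionalism", "counterfeit", "style", "credibility", "bias")

def is_junk_news(fulfilled: set, threshold: int = 3) -> bool:
    """Return True if a source fulfills at least `threshold` of the five criteria.

    `fulfilled` is the set of criterion names a coder judged the source to meet.
    """
    unknown = fulfilled - set(CRITERIA)
    if unknown:
        raise ValueError(f"Unknown criteria: {unknown}")
    return len(fulfilled) >= threshold

# A domain coded as fulfilling style, credibility, and bias meets the label;
# one fulfilling only two criteria does not.
print(is_junk_news({"style", "credibility", "bias"}))  # prints True
print(is_junk_news({"style", "bias"}))                 # prints False
```

The threshold is kept as a parameter only to make the three-of-five rule explicit; the published label uses a fixed cutoff of three.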

    COVID-19 misinformation on YouTube: An analysis of its impact and subsequent online information searches for verification

    Objectives: COVID-19 vaccination misinformation on YouTube can have negative effects on users. Some, after being exposed to such misinformation, may search online for information that either debunks or confirms it. This study's objective is to examine the impact of YouTube videos spreading misinformation about COVID-19 vaccination and the influencing variables, as well as subsequent information seeking and its effect on attitudes toward vaccination.

    Methods: In this observational and survey study, we used a three-group pre-test and post-test design (N = 106 participants). We examined the effects of YouTube videos containing misinformation about COVID-19 vaccination on attitudes toward vaccination via surveys, employed screen recordings with integrated eye tracking to examine subsequent online information searches, and again surveyed participants to examine the effects of the individual searches on their attitudes.

    Results: Receiving misinformation via video tended to have negative effects, mostly on unvaccinated participants. After watching the video, they believed and trusted less in the effectiveness of the vaccines. Internet searches led to more positive attitudes toward vaccination, regardless of vaccination status or prior beliefs. The valences of the search terms entered and the search duration were independent of the participants' prior attitudes. Misinforming content was rarely selected and perceived (read). In general, participants were more likely to perceive supportive and mostly neutral information about vaccination.

    Conclusion: Misinformation about COVID-19 vaccination on YouTube can have a negative impact on recipients. Unvaccinated citizens in particular are a group vulnerable to online misinformation; therefore, it is important to take action against misinformation on YouTube. One approach could be to motivate users to verify online content by doing their own information search on the internet, which led to positive results in this study.

    Degrees of deception: the effects of different types of COVID-19 misinformation and the effectiveness of corrective information in crisis times

    Responding to widespread concerns about misinformation's impact on democracy, we conducted an experiment in which we exposed German participants to different degrees of misinformation on COVID-19 connected to politicized (immigration) and apolitical (runners) issues (N = 1,490). Our key findings show that partially false information is more credible and persuasive than completely false information, and also more difficult to correct. People with congruent prior attitudes are more likely to perceive misinformation as credible and to agree with its positions than people with incongruent prior attitudes. We further show that although fact-checkers can lower the perceived credibility of misinformation on both runners and migrants, corrective messages do not affect attitudes toward migrants. As a key contribution, we show that different degrees of misinformation can have different impacts: more nuanced deviations from facticity may be more harmful, as they are difficult to detect and correct while being more credible.
